4 research outputs found

    Detecting trash and valuables with machine vision in passenger vehicles

    This research assessed the feasibility of a machine vision based detection system that identifies the presence of trash or valuables in passenger vehicles using a custom-designed in-car camera module. The detection system was implemented to capture images of the rear seating compartment of a car intended for use in shared vehicle fleets. Onboard processing of the image was handled by a Raspberry Pi computer, while image classification was performed on a remote server. Two vision-based algorithmic models were created to classify the images: a convolutional neural network (CNN) and a background subtraction model. The CNN was a fine-tuned VGG16 model and produced a final prediction accuracy of 91.43% on a batch of 140 test images. For the output analysis, a confusion matrix was used to identify the correlation between correct and false predictions, and the certainties of the three classes for each classified image were examined as well. The estimated execution time of the system, from image capture to displaying the results, ranged between 5.7 and 11.5 seconds. The background subtraction model failed for this application due to its inability to form a stable background estimate. The incorrect classifications of the CNN were attributable to external sources of variation in the images, such as extreme shadows and a lack of contrast between the objects and their neighbouring background. Changing the camera location and expanding the training image set were proposed as directions for future research.
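    The background subtraction approach that failed here can be illustrated with a minimal sketch. This assumes an exponential running-average background model and a fixed difference threshold — both common choices for this technique, not details confirmed by the abstract:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponential running average: slowly blend new frames into the estimate."""
    return (1 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=30):
    """Pixels whose absolute difference from the background estimate exceeds
    the threshold are flagged as potential leftover items."""
    diff = np.abs(frame.astype(float) - background)
    return diff > threshold

# Toy 4x4 grayscale scene: an empty seat, then the same seat with an "item".
empty_seat = np.full((4, 4), 100.0)
background = empty_seat.copy()
for _ in range(20):                      # stable scene -> stable background
    background = update_background(background, empty_seat)

with_item = empty_seat.copy()
with_item[1:3, 1:3] = 200.0              # bright object placed on the seat
mask = foreground_mask(background, with_item)
print(mask.sum())                        # 4 pixels flagged as foreground
```

    The sketch works only because the toy background never changes; one plausible reading of the failure reported above is that in a real vehicle the scene varies (lighting, shadows, camera vibration) faster than such a model can adapt, so no stable background estimate ever forms.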

    Classification of Trash and Valuables with Machine Vision in Shared Cars

    This study focused on the feasibility of implementing a vision-based architecture to monitor and detect the presence of trash or valuables in shared cars. The system was introduced to take pictures of the rear seating area of a four-door passenger car. Image capture was performed with a stationary wide-angled camera unit, and image classification was conducted with a prediction model on a remote server. For classification, a convolutional neural network (CNN) in the form of a fine-tuned VGG16 model was developed. The CNN yielded an accuracy of 91.43% on a batch of 140 test images. To determine the correlation among the predictions, a confusion matrix was used, and in addition, for each predicted image, the certainty of the distinct output classes was examined. The execution time of the system, from capturing an image to displaying the results, ranged from 5.7 to 17.2 s. Misclassifications from the prediction model were observed primarily due to the variation in ambient light levels and shadows within the images, which left the target items lacking contrast with their neighbouring background. Developments pertaining to the modularity of the camera unit and expanding the dataset of training images are suggested for potential future research.
    Peer reviewed

    Architecture for determining the cleanliness in shared vehicles using an integrated machine vision and indoor air quality-monitoring system

    Funding Information: This work is partially supported by EIT Urban Mobility. The funders had no role in study design, data collection, analysis, decision to publish, or preparation of the manuscript. Publisher Copyright: © 2023, The Author(s).
    In an attempt to mitigate emissions and road traffic, significant interest has recently been noted in expanding the use of shared vehicles to replace private modes of transport. However, one outstanding issue has been the hesitancy of passengers to use shared vehicles due to substandard levels of interior cleanliness resulting from items left behind by previous users. The current research focuses on developing a novel prediction model using computer vision capable of detecting various types of trash and valuables in a vehicle interior in a timely manner to enhance ambience and passenger comfort. The interior state is captured by a stationary wide-angled camera unit located above the seating area. The acquired images are preprocessed to remove unwanted areas and passed to a convolutional neural network (CNN) capable of predicting the type and location of leftover items. The algorithm was validated using data collected from two research vehicles under varying conditions of light and shadow. The experiments yielded an accuracy of 89% over distinct classes of leftover items and an accuracy of 91% over the general classes of trash and valuables. The average execution time was 65 s from image acquisition in the vehicle to displaying the results on a remote server. A custom dataset of 1379 raw images was also made publicly available for future development work. Additionally, an indoor air quality (IAQ) unit capable of detecting specific air pollutants inside the vehicle was implemented. Based on the pilots conducted for air quality monitoring within the vehicle cabin, an IAQ index was derived that corresponds to a 6-level scale in which each level is associated with an explicit state of interior odour.
    Future work will focus on integrating the two systems (item detection and air quality monitoring) to produce a discrete level of cleanliness. The current dataset will also be expanded by collecting data from real shared vehicles in operation.
    Peer reviewed
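    The 6-level IAQ index described above can be sketched as a simple threshold mapping from a pollutant reading to a discrete odour level. The thresholds and the choice of total VOC as the input are illustrative placeholders, not the calibrated values or sensor channels from the study:

```python
def iaq_level(voc_ppb):
    """Map a total-VOC reading (ppb) to a 6-level interior-odour index.
    Threshold values are assumed for illustration, not taken from the paper."""
    thresholds = [65, 220, 660, 2200, 5500]   # 5 boundaries -> 6 levels
    level = 1                                  # 1 = fresh interior
    for t in thresholds:
        if voc_ppb > t:
            level += 1
    return level                               # 6 = strong odour

print(iaq_level(50))    # clean cabin -> level 1
print(iaq_level(300))   # moderate reading -> level 3
print(iaq_level(9000))  # above every boundary -> level 6
```

    A monotone mapping like this is what makes the index easy to fuse with the item-detection output later: both subsystems then report an ordinal cleanliness score that can be combined into one discrete level, as the planned future work suggests.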
